In medical image segmentation, it is often necessary to collect opinions from multiple experts to make the final decision. This clinical routine helps to mitigate individual bias. However, when the data are multiply annotated, standard deep learning models are often not applicable. In this paper, we propose a novel neural network framework, called Multi-Rater Prism (MrPrism), to learn medical image segmentation from multiple labels. Inspired by iterative half-quadratic optimization, the proposed MrPrism combines the multi-rater confidence assignment task and the calibrated segmentation task in a recurrent manner. In this recurrent process, MrPrism can learn the inter-observer variability while taking the semantic properties of the image into account, and finally converges to a self-calibrated segmentation result that reflects the inter-observer agreement. Specifically, we propose the Converging Prism (ConP) and the Diverging Prism (DivP) to process the two tasks iteratively. ConP learns the calibrated segmentation based on the multi-rater confidence maps estimated by DivP, and DivP generates the multi-rater confidence maps based on the segmentation masks estimated by ConP. The experimental results show that, by recurrently running ConP and DivP, the two tasks achieve mutual improvement. The final converged segmentation result of MrPrism outperforms state-of-the-art (SOTA) strategies on a wide range of medical image segmentation tasks.
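As a minimal illustration of the recurrence described above, the sketch below alternates a confidence-estimation module and a calibrated-segmentation module for a fixed number of iterations; the module internals are placeholders and only the alternation pattern follows the abstract.

```python
import torch
import torch.nn as nn

class MrPrismLoop(nn.Module):
    """Minimal sketch of the recurrent ConP/DivP alternation described above.
    The internals of `conp` and `divp` are placeholders (assumptions); only the
    alternating update pattern follows the abstract."""

    def __init__(self, conp: nn.Module, divp: nn.Module, num_iters: int = 4):
        super().__init__()
        self.conp = conp
        self.divp = divp
        self.num_iters = num_iters

    def forward(self, image: torch.Tensor, init_mask: torch.Tensor):
        mask = init_mask
        conf = None
        for _ in range(self.num_iters):
            # DivP: estimate multi-rater confidence maps from the current mask
            conf = self.divp(image, mask)
            # ConP: produce a calibrated segmentation from the confidence maps
            mask = self.conp(image, conf)
        return mask, conf
```

In practice the loop would be unrolled for a small, fixed number of iterations during training, in the spirit of half-quadratic optimization where each sub-problem is solved with the other variable held fixed.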
Different from general visual classification, some classification tasks are more challenging because they involve professional, expert-defined categories of the images; in this paper, we call them expert-level classification. Previous fine-grained visual classification (FGVC) work has made many efforts on some of its specific sub-tasks. However, these methods are difficult to extend to the general case, which relies on a comprehensive analysis of part-global correlations and hierarchical feature interactions. In this paper, we propose the Expert Network (ExpNet) to address the unique challenges of expert-level classification through a unified network. In ExpNet, we hierarchically decouple the part and context features and process them individually using a novel attentive mechanism called Gaze-Shift. In each stage, Gaze-Shift produces a focal-part feature for the subsequent abstraction and memorizes a context-related embedding. We then fuse the final focal embedding with all memorized context-related embeddings to make the prediction. Such an architecture realizes dual-track processing of partial and global information as well as hierarchical feature interactions. We conduct experiments on three representative expert-level classification tasks: FGVC, disease classification, and artwork attribute classification. In these experiments, our ExpNet shows superior performance compared with the state of the art in a wide range of fields, indicating its effectiveness and generalization. The code will be made publicly available.
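The stage-wise Gaze-Shift behaviour described above could be sketched as follows; the gating and projection layers here are assumptions chosen only to show the split into a focal-part feature and a memorized, context-related embedding.

```python
import torch
import torch.nn as nn

class GazeShiftStage(nn.Module):
    """Hypothetical sketch of one Gaze-Shift stage: the stage feature is split
    into a focal-part feature (passed on for further abstraction) and a pooled,
    context-related embedding (memorized for the final fusion). The gating and
    projection layers are assumptions, not the paper's exact design."""

    def __init__(self, channels: int, embed_dim: int):
        super().__init__()
        self.focal_gate = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())
        self.context_proj = nn.Linear(channels, embed_dim)

    def forward(self, feat: torch.Tensor):
        # feat: (B, C, H, W) feature map from a backbone stage
        gate = self.focal_gate(feat)                         # spatial attention over parts
        focal = feat * gate                                  # focal-part feature for the next stage
        context = self.context_proj(feat.mean(dim=(2, 3)))   # global, context-related embedding
        return focal, context
```

The final focal embedding would then be fused with all memorized context embeddings before the classification head, mirroring the dual-track fusion described above.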
The diffusion probabilistic model (DPM) has recently become one of the hottest topics in computer vision. Its image generation applications, such as Imagen, Latent Diffusion Models, and Stable Diffusion, have shown impressive generation capabilities and sparked extensive discussion in the community. Many recent studies have also found it useful in other vision tasks, such as image deblurring, super-resolution, and anomaly detection. Inspired by the success of DPM, we propose the first DPM-based model for general medical image segmentation tasks, named MedSegDiff. To enhance the step-wise regional attention of DPM for medical image segmentation, we propose dynamic conditional encoding, which establishes state-adaptive conditions for each sampling step. We further propose the Feature Frequency Parser (FF-Parser) to eliminate the negative effect of high-frequency noise components in this process. We verify MedSegDiff on three medical segmentation tasks with different image modalities: optic cup segmentation on fundus images, brain tumor segmentation on MRI images, and thyroid nodule segmentation on ultrasound images. The experimental results show that MedSegDiff outperforms state-of-the-art (SOTA) methods by a considerable margin, indicating the generalization and effectiveness of the proposed model.
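A minimal sketch of a feature-frequency parser of the kind described above, assuming a learnable gain map applied in the Fourier domain of the feature maps; the exact parameterization is an assumption, not the paper's implementation.

```python
import torch
import torch.nn as nn

class FFParser(nn.Module):
    """Sketch of a feature-frequency parser: a learnable gain map applied in
    the Fourier domain to attenuate high-frequency noise in feature maps."""

    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # rfft2 keeps width // 2 + 1 bins along the last (frequency) axis
        self.gain = nn.Parameter(torch.ones(channels, height, width // 2 + 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map
        freq = torch.fft.rfft2(x, norm="ortho")    # complex spectrum
        freq = freq * self.gain                    # per-frequency, learnable re-weighting
        return torch.fft.irfft2(freq, s=x.shape[-2:], norm="ortho")
```

Initializing the gain map to ones makes the module start as an identity in the frequency domain, so it can learn to suppress only the frequencies that hurt the conditioning signal.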
The GAMMA Challenge was organized to encourage AI models to screen for glaucoma from a combination of 2D fundus images and 3D optical coherence tomography volumes, as ophthalmologists do.
In medical images, many tissues/lesions can be ambiguous. That is why a group of clinical experts is usually asked to annotate medical segmentations to mitigate individual bias. However, this clinical routine also brings new challenges to the application of machine learning algorithms: without a definite ground truth, it is hard to train and evaluate deep learning models. When annotations are collected from different raters, a common choice is majority vote. However, such a strategy ignores the differences between grading experts. In this paper, we consider the task of predicting segmentation with calibrated inter-observer uncertainty. We note that, in clinical practice, medical image segmentation is often used to assist disease diagnosis. Inspired by this observation, we propose the diagnosis-first principle, which takes disease diagnosis as the criterion for calibrating inter-observer segmentation uncertainty. Following this idea, a framework named Diagnosis-First segmentation Framework (DiFF) is proposed to estimate the diagnosis-first segmentation from raw images. Specifically, DiFF first learns to fuse the multi-rater segmentation labels into a single ground truth that maximizes disease diagnosis performance. We call this fused ground truth the Diagnosis-First Ground Truth (DF-GT). We verify the effectiveness of DiFF on three different medical segmentation tasks: OD/OC segmentation on fundus images, thyroid nodule segmentation on ultrasound images, and skin lesion segmentation on dermoscopic images. Experimental results show that the proposed DiFF can significantly facilitate the corresponding disease diagnosis, outperforming previous state-of-the-art multi-rater learning methods.
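To make the diagnosis-first fusion concrete, the sketch below learns per-rater fusion weights through a hypothetical pre-trained diagnosis network; using one scalar weight per rater is a deliberate simplification of the fusion described above, and the `diagnoser(image, mask)` interface is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiagnosisFirstFusion(nn.Module):
    """Simplified sketch of the diagnosis-first idea: learn how to fuse
    multi-rater masks so that the fused mask (DF-GT) best supports diagnosis.
    A single softmax weight per rater is a strong simplification."""

    def __init__(self, num_raters: int):
        super().__init__()
        self.rater_logits = nn.Parameter(torch.zeros(num_raters))

    def forward(self, rater_masks: torch.Tensor) -> torch.Tensor:
        # rater_masks: (B, R, 1, H, W), soft or binary masks in [0, 1]
        w = torch.softmax(self.rater_logits, dim=0)
        return (rater_masks * w.view(1, -1, 1, 1, 1)).sum(dim=1)

def diagnosis_first_step(fusion, diagnoser, image, rater_masks, label, optimizer):
    """One optimization step: only the fusion weights are updated, while a
    (hypothetical) pre-trained diagnosis network provides the supervision."""
    fused = fusion(rater_masks)
    logits = diagnoser(image, fused)   # assumed interface: (image, mask) -> class logits
    loss = F.cross_entropy(logits, label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return fused.detach(), loss.item()
```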
Glaucoma causes irreversible vision loss due to optic nerve damage, and it cannot be cured. OCT imaging is an important technique for assessing glaucomatous damage, since it helps quantify fundus structures. To promote research on AI techniques in the field of OCT-assisted glaucoma diagnosis, we organized the Glaucoma OCT Analysis and Layer Segmentation (GOALS) challenge in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2022, providing data and corresponding annotations for researchers to study layer segmentation from OCT images and glaucoma classification. This paper describes the released 300 circular-scan OCT images, the baselines of the two sub-tasks, and the evaluation methodology. The GOALS challenge can be accessed at https://aistudio.baidu.com/aistudio/competition/detail/230.
Clinically, accurate annotation of lesions/tissues can significantly facilitate disease diagnosis. For example, segmentation of the optic disc/cup (OD/OC) on fundus images helps glaucoma diagnosis, segmentation of skin lesions on dermoscopic images helps melanoma diagnosis, and so on. With the development of deep learning techniques, a wide range of methods have shown that lesion/tissue segmentation can also facilitate automated disease diagnosis models. However, existing methods are limited in that they can only capture static regional correlations in the image. Inspired by the global and dynamic nature of the vision Transformer, in this paper we propose the Segmentation-Assisted diagnosis Transformer (SeATrans) to transfer segmentation knowledge into a disease diagnosis network. Specifically, we first propose an asymmetric multi-scale interaction strategy to correlate each single low-level diagnosis feature with multi-scale segmentation features. Then, an effective strategy called the SeA-block is adopted to vitalize the diagnosis features via the correlated segmentation features. To model the segmentation-diagnosis interaction, the SeA-block first embeds the diagnosis features according to the segmentation information via an encoder, and then transfers the embedding back into the diagnosis feature space via a decoder. Experimental results show that SeATrans surpasses a wide range of state-of-the-art (SOTA) segmentation-assisted diagnosis methods on several disease diagnosis tasks.
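A hedged sketch of the segmentation-to-diagnosis interaction described above, using standard cross-attention as the encoder step and a projection back into the diagnosis space as the decoder step; both design choices, and the residual form, are assumptions rather than the paper's exact block.

```python
import torch
import torch.nn as nn

class SeABlock(nn.Module):
    """Hypothetical sketch of a SeA-block: diagnosis tokens are embedded by
    attending to segmentation tokens (encoder step), and the embedding is then
    projected back into the diagnosis feature space (decoder step)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.decode = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim))

    def forward(self, diag_tokens: torch.Tensor, seg_tokens: torch.Tensor) -> torch.Tensor:
        # diag_tokens: (B, Nd, D) diagnosis features; seg_tokens: (B, Ns, D) segmentation features
        embedded, _ = self.cross_attn(diag_tokens, seg_tokens, seg_tokens)
        return diag_tokens + self.decode(embedded)   # residual update in the diagnosis space
```

Pairing one low-level diagnosis feature with segmentation tokens drawn from several scales would reproduce the asymmetric multi-scale interaction mentioned in the abstract.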
Segmentation of the optic disc (OD) and optic cup (OC) from fundus images is an important fundamental task for glaucoma diagnosis. In clinical practice, it is often necessary to collect opinions from multiple experts to obtain the final OD/OC annotation. This clinical routine helps to mitigate individual bias. However, when the data are multiply annotated, standard deep learning models are not applicable. In this paper, we propose a novel neural network framework to learn OD/OC segmentation from multi-rater annotations. The segmentation results are self-calibrated by iteratively optimizing the estimation of multi-rater expertness and the calibrated OD/OC segmentation. In this way, the proposed method can realize the mutual improvement of both tasks and finally obtain a refined segmentation result. Specifically, we propose the Diverging Model (DivM) and the Converging Model (ConM) to process the two tasks, respectively. ConM segments the raw image based on the multi-rater expertness maps provided by DivM, and DivM generates the multi-rater expertness maps from the segmentation masks provided by ConM. Experimental results show that, by recurrently running ConM and DivM, the results can be self-calibrated, outperforming a range of state-of-the-art (SOTA) multi-rater segmentation methods.
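For intuition on the multi-rater setting discussed above, the snippet below computes a pixel-wise inter-rater agreement map and a soft Dice loss against it; supervising a calibrated segmentation with this particular loss is an illustrative assumption, not the paper's formulation.

```python
import torch

def soft_agreement_map(rater_masks: torch.Tensor) -> torch.Tensor:
    """Pixel-wise inter-rater agreement: the mean of the binary rater masks.
    rater_masks: (R, H, W) with values in {0, 1}; output values lie in [0, 1]."""
    return rater_masks.float().mean(dim=0)

def soft_dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice between a predicted probability map and a soft target map,
    e.g. an inter-rater agreement map (illustrative choice of supervision)."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```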
With the rapid development of artificial intelligence (AI) in medical image processing, deep learning in color fundus photography (CFP) analysis is also evolving. Although there are some open-source, labeled CFP datasets in the ophthalmology community, large-scale screening datasets only have labels of disease categories, and datasets with annotations of fundus structures are usually small in size. In addition, labeling standards are not uniform across datasets, and there is often no clear information on the acquisition device. Here we release a multi-annotation, multi-quality, and multi-device color fundus image dataset for glaucoma analysis as part of an original challenge -- the Retinal Fundus Glaucoma Challenge 2nd Edition (REFUGE2). The REFUGE2 dataset contains 2000 color fundus images with annotations for glaucoma classification, optic disc/cup segmentation, and fovea localization. Meanwhile, the REFUGE2 challenge defines three sub-tasks of automatic glaucoma diagnosis and fundus structure analysis and provides an online evaluation framework. Based on the characteristics of the multi-device and multi-quality data, several methods with strong generalization ability were provided in the challenge to make the predictions more robust. This shows that REFUGE2 brings attention to the characteristics of real-world multi-domain data, bridging the gap between scientific research and clinical application.
Color fundus photography and Optical Coherence Tomography (OCT) are the two most cost-effective tools for glaucoma screening. Both imaging modalities contain prominent biomarkers indicating suspected glaucoma. Clinically, it is often recommended to take both screenings for a more accurate and reliable diagnosis. However, although numerous computer-aided diagnosis algorithms have been proposed based on fundus images or OCT volumes, there are still few methods leveraging both modalities for glaucoma assessment. Inspired by the success of the Retinal Fundus Glaucoma Challenge (REFUGE) we held previously, we set up the Glaucoma grAding from Multi-Modality imAges (GAMMA) Challenge to encourage the development of fundus & OCT-based glaucoma grading. The primary task of the challenge is to grade glaucoma from both 2D fundus images and 3D OCT scanning volumes. As part of GAMMA, we have publicly released a glaucoma-annotated dataset with both 2D color fundus photographs and 3D OCT volumes, which is the first multi-modality dataset for glaucoma grading. In addition, an evaluation framework is established to assess the performance of the submitted methods. During the challenge, 1272 results were submitted, and the top 10 teams were selected for the final stage. We analyze their results and summarize their methods in this paper. Since all these teams submitted their source code during the challenge, a detailed ablation study is also conducted to verify the effectiveness of the particular modules they proposed. We find that many of the proposed techniques are practical for the clinical diagnosis of glaucoma. As the first in-depth study of fundus & OCT multi-modality glaucoma grading, we believe the GAMMA Challenge will be an essential starting point for future research.